Introduction to Model Validation
Abstract
The discipline of mathematical model validation is increasing in importance as the value of accurate models of physical systems increases. The fundamental activity of model validation is the comparison of predictions from a mathematical model of a system to the measured behavior of the system. This discussion motivates the need for model validation and introduces some preliminary elements of model validation. This is the first in a sequence of six tutorial presentations on model validation, and it introduces the five presentations that follow.

Motivation and Introduction

Mathematical model validation is defined as the “process of determining the degree to which a computer model is an accurate representation of the real world from the perspective of the intended model applications” (ASME, 2006; U.S. DOE, 2000; AIAA, 1998). It is accomplished through the comparison of predictions from a model to experimental results. There are numerous, related reasons for performing validations of mathematical models. For example:

• There may be a need or desire to replace experimentation with model predictions, and, of course, a corresponding requirement that the model predictions have some degree of accuracy. The need for a model may arise from the fact that it is impossible to test system behavior or survivability in some regimes of operation. For example, a requirement of a building structure may be that it must survive a blast load, yet, for regulatory reasons, it may be impossible to test the structure under blast loading.

• Alternatively, the need for validation may arise from a necessity to prove the reliability of a structure under a broad range of operating conditions or environments. Because it may be expensive to simulate the conditions in the laboratory or realize them in the field, an accurate model is required.

• Another reason for model validation is that a system may be undergoing design changes that require analyses to assure that the modifications yield acceptable system behavior. A validated model can be used to assess the system behavior.

In all these situations it is useful to confirm analysts’ abilities to produce accurate models. The sections that follow introduce some ideas and terminology from model validation and list some steps that can be followed to perform a model validation. The paper introduces an example structure with a model to be validated, and carries it through steps involving planning, experiments, and validation comparisons.

There are normally several parties or groups involved in the performance of a validation. These are:

• Analysts/modelers. These are persons capable of creating computational models from mathematical and conceptual models, when details of the latter are established. They are capable of anticipating the behaviors of computational models that include specific features.

• Experimentalists. These are persons capable of planning and performing the calibration and validation experiments required in a validation. The experiments may be performed in the laboratory or field, and must normally be high-precision experiments with high-accuracy measurements.

• Validation analysts (persons performing validation comparisons). These are persons knowledgeable about validation procedures, including means for comparing model predictions to experimental outcomes. They should possess intuition regarding the difficulty of obtaining positive validation results given various system measures of response and various means of comparison.
• Customers. These are the persons who authorize a validation analysis – the persons for whom a validation analysis is performed. They are critical to the validation specification because they understand the engineering decision that is required of a model.

• Stakeholders. These are persons with an interest in the outcome of a validation comparison.

All these parties or groups should cooperatively participate in the detailed specification of the validation plan.

This is the first in a sequence of tutorial papers involving the validation of structural dynamic models. The other papers are included in these Proceedings and involve:

• Selection of response features and adequacy criteria of structural dynamic systems that can be used to compare predictions from mathematical models to experimental results based on the requirements of the model. (Mayes, 2009a)

• Uncertainty quantification (UQ) of the outcomes of laboratory and field experiments and the parameters and predictions of mathematical models using the theories of probability and statistics. (Paez and Swiler, 2009)

• UQ of the outcomes of laboratory and field experiments and the parameters and predictions of mathematical models using epistemic modeling and analysis. (Swiler and Paez, 2009)

• The role of model correlation and calibration in structural dynamics. (Mayes, 2009b)

• An example of model validation based on experiments on a real structure that is modeled with a large, complex finite element model. (Mayes, et al., 2009)

Validation – Definition and Operations

The validation of a mathematical model of a structural dynamic system entails the comparison of predictions from the model to measured results from experiments. Before a well-structured validation comparison can be performed, several decisions must be made and criteria defined. This section is a brief summary of items to be considered before commencement of calibration and validation experiments, and model predictions; those three activities must be completed before validation comparisons can be performed. The following is a list of activities and decisions – along with a brief description of each – to be completed prior to experimentation, modeling, and validation comparisons. Many of the terms used here have a specific meaning in the framework of validation; those terms are given in italics and their definitions are taken from the Guide for Verification and Validation in Computational Solid Mechanics (ASME, 2006). Although the activities and decisions listed here are given in order, there is interplay among the elements that may require iterative modification to obtain a realistic and useful plan for validation. The topics to be covered are:

• Specify the model use and purpose (What decision is to be made?)
• Specify validation experiments
• Specify the conceptual model
• Specify the mathematical model
• Specify the computational model
• Specify the physical system response measures of interest
• Specify the validation metrics
• Specify the domain of comparison
• Specify the calibration experiments
• Specify the adequacy criteria (validation requirements)

In addition to the listing of activities and definition of terms, an example that shows how the steps might be applied is included.

1. Specify the model use and purpose (What decision is to be made?): The use and purpose of the model refer to the applications for which the model is developed.
The current model and its validation may be one in a sequence of validation comparisons that will be performed to, eventually, validate a model for the ultimate system of interest. The ultimate system of interest is the physical system and its associated environment for which a computational model is being developed. It is useful to identify the current model and validation comparison relative to the ultimate system of interest because a particular level of accuracy will be required of the computational model of the ultimate system of interest, and the accuracy requirements of the current model must be compatible with those requirements. The decision to be made normally considers whether model predictions are sufficiently accurate.

Example: The ultimate system of interest is an ensemble of shell-payload structures with interior hardware that will experience random vibration environments. Each shell-payload structure consists of an external layered shell plus interior components. The external layered shell consists of an exterior composite shell bonded to an internal aluminum shell. The interior components are substructure elements with mechanical properties. Prior to validation of the ultimate system of interest it is desired to validate a model of the external layered shell, only. Figure 1 shows a schematic of the external layered shell, and the shell with interior components. The hierarchy includes two elements, and the difficulty of model validation increases from the first to the second. The validation considered here is the validation of the computational model of the shell structure.

Figure 1. Validations to be performed during development of the computational model of the shell-payload structure. The schematic on the left denotes the shell structure. The schematic on the right denotes the shell structure with interior components.

2. Specify validation experiments: Early in the planning process validation experiments must be specified. A validation experiment is an experiment that is designed and performed to generate data for the purpose of model validation. The validation experiments must be specified early in the planning process because details of the computational model to be developed for making predictions of validation experiment results will be specified during the validation planning phase, and the computational model needs to be sufficiently detailed, and contain the appropriate elements, to make the required predictions with acceptable accuracy. It is emphasized that the validation experiments defined here differ from the calibration experiments defined below. The calibration experiments are used to develop models for materials, sub-components, and connections in the physical system currently modeled.

Example: The validation experiments will subject eight nominally identical shell structures, described above, to base-excited, stationary random vibration. The random vibration load will be a band-limited white noise with frequency content in [10,2000] Hz, and it will be generated using an electrodynamic shaker. The input spectral density level is 5.25 × 10⁻² g²/Hz. Each structure will be subjected to the random vibration environment for a duration of three minutes. The structures will be attached to the electrodynamic shaker via a fixture during testing. Uniaxial acceleration responses will be measured in the axial direction, at three locations along the length of the shell, on the inside of the shell. Figure 2 shows a schematic of one structure under random vibration load. The load is primarily uni-directional and input at the base of the test structure. The squares denote the locations of the accelerometers.

Figure 2. Shell structure subjected to random vibration load.
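As a rough check on this input specification, the RMS of a flat, band-limited spectrum is the square root of the spectral level times the bandwidth: sqrt(5.25 × 10⁻² g²/Hz × 1990 Hz) ≈ 10.2 g, consistent with the excitation level quoted in item 8 below. The short Python sketch that follows synthesizes one illustrative realization of such an input and confirms the RMS numerically; the sampling rate, filter order, record length, and random seed are illustrative choices, not test specifications.

```python
import numpy as np
from scipy import signal

# Input specification from the example (values from the text).
S0 = 5.25e-2               # one-sided input spectral density, g^2/Hz
f_lo, f_hi = 10.0, 2000.0  # band limits, Hz
fs = 8192.0                # sampling rate, Hz (illustrative choice)
T = 180.0                  # record duration, s (three minutes)

# Closed-form RMS of a flat, band-limited spectrum: sqrt(S0 * bandwidth).
rms_theory = np.sqrt(S0 * (f_hi - f_lo))      # approximately 10.2 g

# One illustrative realization: Gaussian white noise scaled so that its
# one-sided spectral density equals S0, then band-limited to [f_lo, f_hi].
rng = np.random.default_rng(0)
white = rng.standard_normal(int(T * fs)) * np.sqrt(S0 * fs / 2.0)
sos = signal.butter(8, [f_lo, f_hi], btype="bandpass", fs=fs, output="sos")
accel = signal.sosfilt(sos, white)            # base acceleration, g

print(f"theoretical RMS = {rms_theory:.2f} g")
print(f"realized RMS    = {accel.std():.2f} g")
```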
3. Specify the conceptual model: The conceptual model of the physical system is the set of assumptions and descriptions of physical processes representing the behavior of the system of interest, and from which the mathematical model and the validation experiments can be constructed.

Example: The shell structures are, of course, nonlinear to some extent, as are practically all real structures, but it is thought that the degree of nonlinearity is slight. In addition, the ensemble of structures from which the eight test structures are drawn is random in practically every respect. Material properties are random; the geometries of the outer shell and internal aluminum shell are random; the thickness of the bond layer is random; adhesion of the bond material to the outer shell and the internal shell is imperfect and random; etc. But expert opinion holds that the major source of randomness is the material properties of the bonding material that bonds the external shell to the internal aluminum shell, and those material properties are the only quantities conceptually considered to be random. Therefore, the bonding material modulus of elasticity and shear modulus, E and G, are modeled as jointly distributed random variables. All other facets of the physical system are conceptually considered capable of being accurately modeled as deterministic, with nominal system dimensions and nominal material parameters.

4. Specify the mathematical model: The mathematical model is the mathematical equations, boundary conditions, excitations, initial conditions, and other modeling data needed to describe the conceptual model. The mathematical model must be specified not only as a prelude to specification of the computational model, but also to assure that all phenomenology anticipated in the validation experiments is included. When the computational model is to be implemented in an uncertainty quantification framework, the parameters and functions in the mathematical model intended to simulate the uncertainty are specified here.

Example: Within the frequency range of interest ([10,2000] Hz), at the excitation levels specified, and at the corresponding response levels anticipated, it is assumed that the structure can be modeled with sufficient accuracy by linear partial differential equations. An analysis that does not require initial conditions will be used, so no initial conditions need be specified here. The excitation is a stationary, random, externally enforced acceleration, with the spectral density provided in item 2, above. Because the bonding material modulus of elasticity and shear modulus are assumed random, a probability model for their joint realizations will be developed using data obtained during calibration experiments. Both the modulus of elasticity and the shear modulus of the bonding layer are modeled as spatially constant throughout the bonding layer.
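To make the conceptual and mathematical models above concrete, the sketch below shows one way the jointly distributed bond moduli (E, G) might be represented and sampled for a Monte Carlo analysis. The bivariate lognormal form, the nominal values, the coefficients of variation, and the correlation coefficient are all placeholder assumptions for illustration; the actual probability model is calibrated from experimental data (Paez and Swiler, 2009).

```python
import numpy as np

# Placeholder joint probability model for the bond-layer moduli (E, G).
# The lognormal form and all numbers below are illustrative assumptions.
mean_E, mean_G = 2.0e9, 0.7e9   # Pa, hypothetical nominal values
cov_E, cov_G = 0.15, 0.15       # coefficients of variation (assumed)
rho = 0.8                       # assumed correlation between ln E and ln G

# Lognormal parameters from the mean and coefficient of variation.
sig_lnE = np.sqrt(np.log(1.0 + cov_E**2))
sig_lnG = np.sqrt(np.log(1.0 + cov_G**2))
mu_lnE = np.log(mean_E) - 0.5 * sig_lnE**2
mu_lnG = np.log(mean_G) - 0.5 * sig_lnG**2

def sample_EG(n_samples, seed=0):
    """Draw jointly distributed (E, G) samples for Monte Carlo FEM runs."""
    rng = np.random.default_rng(seed)
    cov = np.array([[sig_lnE**2, rho * sig_lnE * sig_lnG],
                    [rho * sig_lnE * sig_lnG, sig_lnG**2]])
    ln_samples = rng.multivariate_normal([mu_lnE, mu_lnG], cov, size=n_samples)
    return np.exp(ln_samples)   # columns: E, G

samples = sample_EG(100)
print(samples.mean(axis=0))     # should be near (mean_E, mean_G)
```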
The computational model must be carefully defined (following code verification and solution verification) to assure that the features of the mathematical model are captured with sufficient accuracy to guarantee positive validation results if, indeed, the model should be validated. When the computational model is implemented in the uncertainty quantification framework, the specific implementations that permit simulation of uncertainty are specified here.

Example: The system will be modeled with a finite element model (FEM). The code to be used is Salinas, a Sandia National Laboratories-developed code for the simulation of (mostly linear) structural dynamics (see Reese, et al., 2004). The outer shell, the inner aluminum shell, and the bonding layer are all modeled using solid elements. Solution verification (see Roache, 1998) indicates that an FEM with approximately 1.1 million nodes is adequately converged to yield computational accuracy compatible with the measurement accuracy of the transducers used to make the experimental measurements. The FEM is deterministic, i.e., during any code run all model data – system geometry, parameters, loads, etc. – must be specified as constants. Probability analysis using the FEM is accomplished via Monte Carlo simulation. That is, randomness is introduced into the analysis by generating samples of the random modulus of elasticity and shear modulus from their probability model, then introducing them to the analysis via the model data of the FEM. Each set of inputs – modulus of elasticity and shear modulus – yields a different response. The collection of responses forms the ensemble of the random response. (Some ideas of probability modeling are introduced in Paez and Swiler (2009), and some ideas of non-probabilistic uncertainty quantification are introduced in Swiler and Paez (2009), two papers in this tutorial sequence.) The ensemble of the random response can be used to develop a probability model of the response measures. The computational model is shown schematically in Figure 3, along with the FEM equations of motion in matrix form:

$m\ddot{x} + c\dot{x} + kx = q$

where m is the mass matrix, c is the damping matrix, k is the stiffness matrix, q is the force vector, and x is the response displacement vector; dots denote differentiation with respect to time.

Figure 3. Schematic of the computational model and matrix equation of motion.
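The Monte Carlo procedure described above can be sketched as a simple driver loop: draw (E, G) samples from their joint probability model, run one deterministic FEM analysis per sample, and collect the predicted response spectral densities into an ensemble. In the sketch below, run_fem_psd is a hypothetical stand-in for an actual Salinas run, and the lognormal draws are a placeholder for the calibrated joint model; only the looping pattern reflects the procedure described in the text.

```python
import numpy as np

rng = np.random.default_rng(1)
freqs = np.arange(10.0, 2000.0, 2.0)      # prediction frequencies, Hz

def run_fem_psd(E, G, freqs):
    """Hypothetical stand-in for one deterministic Salinas FEM run.

    In the actual workflow this step would write the sampled (E, G) into the
    FEM model data, execute the code, and return the predicted one-sided
    acceleration spectral densities at the three response locations.  Here it
    simply returns placeholder data of the right shape.
    """
    return np.abs(rng.standard_normal((3, freqs.size)))   # g^2/Hz (placeholder)

# Placeholder (E, G) samples; in practice these come from the calibrated
# joint probability model of the bond moduli.
n_runs = 50
EG_samples = np.column_stack([rng.lognormal(np.log(2.0e9), 0.15, n_runs),
                              rng.lognormal(np.log(0.7e9), 0.15, n_runs)])

# Monte Carlo loop: each (E, G) sample yields one deterministic response;
# the collection of runs forms the ensemble of the random response.
ensemble = np.array([run_fem_psd(E, G, freqs) for E, G in EG_samples])
print(ensemble.shape)                     # (n_runs, 3 locations, n_freqs)
```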
6. Specify the physical system response measures of interest: The response measures of interest are the quantities that are functions of system behavior or response to be used in the comparison of model predictions to measured experimental responses. They need to be quantities that can be inferred from measured experimental excitations and responses, and from specified model excitations and computed model responses.

Example: The response measures of interest are obtained from the spectral densities of the responses at the three experimental measurement locations, denoted by squares in Figure 2. An acceleration record is obtained from each accelerometer; therefore, three spectral densities are computed. The spectral density of a random process defines the distribution of mean square signal content in the frequency domain. (See Wirsching, Paez, Ortiz, 1995.) The area under the spectral density function of a random process is the total mean square of the process. Two response measures are defined for each spectral density. The first is the square root of the area under the spectral density curve over the frequency range [0,300] Hz; the second is the square root of the area under the spectral density curve over the frequency range [300,2000] Hz. The first response measure is the root-mean-square (RMS) of response due to “modes” in the response up to 300 Hz; the second response measure is the RMS of response due to “modes” in the response in [300,2000] Hz. A typical acceleration response spectral density excited by a random vibration environment like the validation test environment is shown in Figure 4, along with the definitions of the response measures of interest and the actual response measures from the spectral density curve.

Figure 4. A typical acceleration response spectral density excited by a band-limited white noise acceleration environment. Definitions of the response measures of interest. Response measures for the spectral density shown.

7. Specify validation metrics: These are the precise mathematical means for comparing model-predicted response measures to response measures computed from experimental responses.

Example: There are numerous means for accomplishing validation comparisons. Of course, when results are available from multiple experiments, the approach should involve comparisons based on the theories of probability and statistics. Two of the papers in this tutorial sequence (Paez and Swiler, 2009, and Swiler and Paez, 2009) discuss methods for comparison of model predictions to experimental outcomes when both the predictions and the outcomes are uncertain. For now, we refer those interested in learning about such methods of comparison to those papers, and note that both the response measures from the eight validation experiments and multiple response measures from the model predictions will be computed and graphed.

8. Specify the domain of comparison: The domain of comparison is the region of environment space and model and physical system parameter space within which experiment responses will be measured and model predictions will be made. Once a validation comparison is made over a particular domain, the results of the comparison are normally specified only for that domain, and the model is used only in that domain unless a careful extrapolation implies that the results are useful over an extended domain.

Example: The validation test environment is specified as a stationary random vibration with root-mean-square (RMS) value 10.2 g, and with frequency content up to about 2000 Hz. Figure 4 shows that the environment excites responses in the structure up to about 12 g RMS. Assuming that a computational model that accurately simulates the system at these levels can also do so at lower levels, the domain of comparison can be thought of as random vibration excitations up to 10 g RMS, with signal content up to 2000 Hz, exciting responses up to about 12 g RMS.

9. Specify the calibration experiments: Calibration experiments are the experiments to be performed and used with the structural model to identify model parameters. Once the calibration experiment results are used to infer values for model parameters, they cannot be used to make inferences of model validity.

Example: Calibration experiments are required to develop the probability model for the bond material parameters, E and G, the modulus of elasticity and the shear modulus.
Details of how this is accomplished are contained in Paez and Swiler (2009), and a summary of those results is provided in the section entitled “Perform Calibration Experiments,” below. However, note that the calibration experiments must be specified prior to performance of the validation experiments and construction of the computational model.

10. Specify adequacy criteria (validation requirements): The adequacy criteria are expressed in terms of the response measures of interest defined in item 6,

$R_1 = \sqrt{\int_{0}^{300} G(f)\,df}$ g,  $R_2 = \sqrt{\int_{300}^{2000} G(f)\,df}$ g,

where G(f) is the one-sided acceleration response spectral density of a measured or predicted response.
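A minimal sketch of how the two response measures might be computed from a measured (or predicted) acceleration record follows; the Welch spectral estimate, the segment length, and the synthetic record used for illustration are assumptions, not part of the test specification.

```python
import numpy as np
from scipy import signal

def band_rms(accel, fs, f_lo, f_hi, nperseg=4096):
    """Square root of the area under the one-sided acceleration spectral
    density between f_lo and f_hi -- the response measures R1 and R2."""
    freqs, psd = signal.welch(accel, fs=fs, nperseg=nperseg)   # g^2/Hz
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return np.sqrt(np.trapz(psd[band], freqs[band]))           # g

# Illustrative use on a synthetic three-minute record (placeholder data).
fs = 8192.0
accel = np.random.default_rng(2).standard_normal(int(180 * fs))
R1 = band_rms(accel, fs, 0.0, 300.0)     # RMS of response below 300 Hz, g
R2 = band_rms(accel, fs, 300.0, 2000.0)  # RMS of response in [300, 2000] Hz, g
print(f"R1 = {R1:.2f} g, R2 = {R2:.2f} g")
```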